Naïve Bayes Algorithm
The Naïve Bayes Algorithm is a probabilistic classification technique based on Bayes' theorem. It assumes that the features used for classification are conditionally independent of each other given the class. This independence assumption rarely holds exactly in real-world data, which is why the method is called "naïve." Nevertheless, the algorithm has proven effective and efficient in many classification tasks, particularly text categorization, spam filtering, and sentiment analysis.
The core principle of the Naïve Bayes Algorithm is to compute the conditional probability of each class given the input features and then predict the class with the highest probability. Bayes' theorem states that the probability of a class given the features is proportional to the probability of the features given the class, multiplied by the prior probability of the class; in symbols, P(class | features) ∝ P(features | class) × P(class). Under the independence assumption, P(features | class) factors into a product of per-feature probabilities, each of which is estimated from the training data and then applied to classify new, unseen data points. Despite its simplicity and the naivety of its independence assumption, the algorithm often performs well, especially when the features are in fact close to conditionally independent, and it has the advantage of being easy to implement and computationally cheap.
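The posterior computation described above can be sketched numerically. The following minimal Python illustration uses a tiny invented spam/ham word-count dataset (all counts and vocabulary are hypothetical, chosen only to make the arithmetic concrete) and applies Laplace smoothing so unseen words do not zero out the product:

```python
from collections import Counter, defaultdict

# Toy training data: each row is (list of words, class label).
# The examples are invented purely for illustration.
train = [
    (["free", "win"], "spam"),
    (["free", "offer"], "spam"),
    (["meeting", "agenda"], "ham"),
    (["lunch", "meeting"], "ham"),
]

# Class priors come from class frequencies in the training set.
class_counts = Counter(label for _, label in train)

# Per-class word counts estimate P(word | class).
word_counts = defaultdict(Counter)
for words, label in train:
    word_counts[label].update(words)

vocab = {w for words, _ in train for w in words}

def posterior(words):
    """Return normalized P(class | words) under the naive independence
    assumption, with add-one (Laplace) smoothing."""
    scores = {}
    for c, n_c in class_counts.items():
        total = sum(word_counts[c].values())
        p = n_c / len(train)  # prior P(class)
        for w in words:
            # Smoothed likelihood P(word | class)
            p *= (word_counts[c][w] + 1) / (total + len(vocab))
        scores[c] = p
    # Normalize the unnormalized products so they sum to 1.
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

print(posterior(["free"]))  # "free" appears only in spam, so spam wins
```

The prediction is simply the class with the largest posterior; normalization is optional for classification but makes the scores interpretable as probabilities.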
# Load the e1071 package, which provides naiveBayes()
library(e1071)

# Combine training features and labels into a single data frame
train_data <- data.frame(x_train, y = y_train)

# Fit the model: predict y from all other columns
fit <- naiveBayes(y ~ ., data = train_data)

# Inspect the class priors and conditional probability tables
print(fit)

# Predict class labels for the test set
predicted <- predict(fit, x_test)